Constant Rate Approximate Maximum Margin Algorithms
Authors
Abstract
We present a new class of perceptron-like algorithms with margin in which the “effective” learning rate, defined as the ratio of the learning rate to the length of the weight vector, remains constant. We prove that the new algorithms converge in a finite number of steps and show that there exists a limit of the parameters involved in which convergence leads to classification with maximum margin.
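As a rough illustration of the idea (a minimal sketch, not the paper's exact formulation: the margin-error condition, the parameter names eta_eff and beta, and the non-zero initialization are all assumptions made here), the update can be written so that the learning rate scales with the length of the weight vector, keeping the effective rate fixed:

```python
import numpy as np

def constant_rate_perceptron(X, y, eta_eff=0.01, beta=0.1, max_epochs=1000):
    """Perceptron-like updates whose effective learning rate
    eta_t / ||w_t|| is held constant at eta_eff (illustrative sketch)."""
    w = eta_eff * y[0] * X[0].astype(float)   # non-zero start so ||w|| > 0
    R = np.max(np.linalg.norm(X, axis=1))     # radius of the data
    for _ in range(max_epochs):
        converged = True
        for xi, yi in zip(X, y):
            norm_w = np.linalg.norm(w)
            # margin error: normalized margin at most a fraction beta of R
            if yi * (w @ xi) <= beta * R * norm_w:
                # eta_t = eta_eff * ||w_t||, so eta_t / ||w_t|| = eta_eff
                w = w + eta_eff * norm_w * yi * xi
                converged = False
        if converged:
            break
    return w
```

On linearly separable data such a loop stops once no margin errors remain, which is consistent with the finite-convergence claim of the abstract; the abstract's limit statement concerns how the achieved margin approaches the maximum as the parameters are driven to a suitable limit.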
Similar papers
The Role of Weight Shrinking in Large Margin Perceptron Learning
We introduce into the classical perceptron algorithm with margin a mechanism that shrinks the current weight vector as a first step of the update. If the shrinking factor is constant, the resulting algorithm may be regarded as a margin-error-driven version of NORMA with constant learning rate. In this case we show that the allowed strength of shrinking depends on the value of the maximum margin....
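A hedged sketch of such an update (the NORMA-style step with a constant shrinking factor; the parameter names and values are assumptions, not the paper's exact rule):

```python
import numpy as np

def shrinking_margin_perceptron(X, y, eta=0.05, lam=0.01, margin=0.1,
                                max_epochs=1000):
    """On each margin error, first shrink the weight vector by a constant
    factor, then apply the usual perceptron step (NORMA-like update)."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        clean = True
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= margin:      # margin error drives the update
                w = (1.0 - eta * lam) * w    # shrinking comes first ...
                w = w + eta * yi * xi        # ... then the perceptron step
                clean = False
        if clean:
            break
    return w
```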
Boosting Based on a Smooth Margin
We study two boosting algorithms, Coordinate Ascent Boosting and Approximate Coordinate Ascent Boosting, which are explicitly designed to produce maximum margins. To derive these algorithms, we introduce a smooth approximation of the margin that one can maximize in order to produce a maximum margin classifier. Our first algorithm is simply coordinate ascent on this function, involving a line se...
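The smoothing idea can be illustrated with a generic soft-min built from a scaled log-sum-exp; this is the standard construction for smoothly approximating a minimum, not necessarily the exact function the authors maximize:

```python
import numpy as np

def soft_min_margin(margins, alpha=10.0):
    """Smooth lower approximation of min(margins):
        -(1/alpha) * log(sum_i exp(-alpha * m_i)).
    It differs from the true minimum by at most log(n)/alpha,
    so increasing alpha sharpens the approximation."""
    z = -alpha * np.asarray(margins, dtype=float)
    z_max = z.max()   # logsumexp trick for numerical stability
    return -(z_max + np.log(np.exp(z - z_max).sum())) / alpha
```

Unlike the hard minimum, this surrogate is differentiable everywhere, which is what makes coordinate-ascent-style maximization tractable.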
Online Learning of Approximate Maximum Margin Classifiers with Biases
We consider online learning of linear classifiers which approximately maximize the 2-norm margin. Given a linearly separable sequence of instances, typical online learning algorithms such as Perceptron and its variants, map them into an augmented space with an extra dimension, so that those instances are separated by a linear classifier without a constant bias term. However, this mapping might ...
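The augmentation the snippet refers to is the standard one: append a constant coordinate so that the bias is absorbed into the weight vector. A small sketch (the choice of the constant R as the data radius is an assumption here):

```python
import numpy as np

def augment(X, R=None):
    """Map instances into the augmented space by appending a constant
    extra coordinate, so a homogeneous classifier w_aug @ x_aug can
    absorb the bias term of an affine classifier w @ x + b."""
    if R is None:
        R = np.max(np.linalg.norm(X, axis=1))   # data radius
    extra = np.full((X.shape[0], 1), R)         # constant last coordinate
    return np.hstack([X, extra])
```

A homogeneous separator in the augmented space corresponds to an affine one in the original space, but its 2-norm margin in the augmented space need not coincide with the true margin of the biased classifier, which is the gap such algorithms must account for.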
Perceptron-like large margin classifiers
We consider perceptron-like algorithms with margin in which the standard classification condition is modified to require a specific value of the margin in the augmented space. The new algorithms are shown to converge in a finite number of steps and are used to approximately locate the optimal weight vector in the augmented space following a procedure analogous to Bolzano’s bisection method. We demo...
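The bisection idea can be sketched abstractly; `train_with_margin` below is a hypothetical predicate reporting whether the algorithm still converges when a margin of value beta is required, and the whole routine is illustrative rather than the paper's procedure:

```python
def bisect_margin(train_with_margin, lo=0.0, hi=1.0, tol=1e-3):
    """Bolzano-style bisection on the required margin value: narrow down
    the largest beta for which training still converges."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if train_with_margin(mid):   # achievable: push the target up
            lo = mid
        else:                        # unachievable: back off
            hi = mid
    return lo
```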
The Perceptron with Dynamic Margin
The classical perceptron rule provides a varying upper bound on the maximum margin, namely the length of the current weight vector divided by the total number of updates up to that time. Requiring that the perceptron update its internal state whenever the normalized margin of a pattern is found not to exceed a certain fraction of this dynamic upper bound, we construct a new approximate maximum ...
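A rough sketch of the resulting rule, assuming the update condition "normalized margin at most eps times the dynamic bound ||w||/t", with t the number of updates so far; the details are assumptions, not the paper's exact algorithm:

```python
import numpy as np

def dynamic_margin_perceptron(X, y, eps=0.5, max_epochs=1000):
    """Update whenever a pattern's normalized margin fails to exceed the
    fraction eps of the running upper bound ||w|| / t on the maximum
    margin, where t counts updates made so far (illustrative sketch)."""
    w = y[0] * X[0].astype(float)   # the initial step counts as update 1
    t = 1
    for _ in range(max_epochs):
        clean = True
        for xi, yi in zip(X, y):
            norm_w = np.linalg.norm(w)
            # y*(w @ x)/||w|| <= eps*||w||/t, rewritten division-free:
            if yi * (w @ xi) * t <= eps * norm_w ** 2:
                w = w + yi * xi      # classical perceptron step
                t += 1
                clean = False
        if clean:
            break
    return w
```

Because the bound ||w||/t tightens as updates accumulate, the required fraction of it becomes an increasingly accurate stand-in for a fraction of the true maximum margin.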